Token Distillation


Token Distillation: Attention-aware Input Embeddings For New Tokens

Dobler, Konstantin, Elliott, Desmond, de Melo, Gerard

arXiv.org Artificial Intelligence

Most state-of-the-art Large Language Models (LLMs) are trained using a static tokenizer, usually derived by a byte-pair encoding scheme before model training (Sennrich et al., 2016). Text from domains or languages underrepresented in the tokenizer's training data is split into excessively many subtokens; this excessive tokenization not only leads to reduced performance on downstream tasks (Rust et al., 2021; Ali et al., 2024) but also increases the computational cost of inference. Furthermore, Lesci et al. (2025) study how words which are not a single token behave in practice. A solution to this problem is to modify the existing vocabulary to suit the specific needs: new tokens can be added, when coupled with a good initialization for their new embeddings. Although adding new tokens to a model's vocabulary can reduce over-tokenization, a new token typically corresponds to a word that was previously split into multiple subtokens, and the semantics of such a word will largely not be stored in the raw input embeddings of its subtokens at all - but rather constructed by the Transformer's attention/feed-forward layer stack during contextualization (Elhage et al., 2022; Lad et al., 2024). We motivate our proposed method by describing the fundamental limitations of current embedding initialization methods and empirically verify our claims. Our experimental setup is detailed in Section 4, and we demonstrate the efficacy of our method, dubbed "Token Distillation", in Section 5.
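To make the limitation concrete: a common baseline (which this abstract argues against) initializes a new token's input embedding as the mean of the embeddings of the subtokens it previously tokenized into. The sketch below is illustrative only, with toy dimensions and invented names, not the paper's method.

```python
# Hypothetical baseline sketch: mean-of-subtokens embedding initialization.
# If the word's meaning is assembled by attention/feed-forward layers rather
# than stored in the raw input embeddings, this average misses that meaning.

def mean_init(subtoken_embeddings):
    """Average a list of equal-length embedding vectors component-wise."""
    dim = len(subtoken_embeddings[0])
    n = len(subtoken_embeddings)
    return [sum(vec[i] for vec in subtoken_embeddings) / n for i in range(dim)]

# A word previously split into 3 subtokens, each with a toy 4-d embedding:
subs = [[1.0, 0.0, 2.0, 0.0],
        [0.0, 1.0, 0.0, 2.0],
        [2.0, 2.0, 1.0, 1.0]]
new_embedding = mean_init(subs)
print(new_embedding)  # [1.0, 1.0, 1.0, 1.0]
```

The new token's row in the embedding matrix would then be set to `new_embedding`; attention-aware methods instead try to capture what the layer stack builds from the subtokens in context.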


FLUID: Flow-Latent Unified Integration via Token Distillation for Expert Specialization in Multimodal Learning

Cuong, Van Duc, Tam, Ta Dinh, Chinh, Tran Duc, Hanh, Nguyen Thi

arXiv.org Artificial Intelligence

Multimodal classification requires robust integration of visual and textual signals, yet common fusion strategies are brittle and vulnerable to modality-specific noise. In this paper, we present FLUID - Flow-Latent Unified Integration via Token Distillation for Expert Specialization, a principled token-level pipeline that improves cross-modal robustness and scalability. FLUID contributes three core elements: (1) Q-transforms, learnable query tokens that distill and retain salient token-level features from modality-specific backbones; (2) a two-stage fusion scheme that enforces cross-modal consistency via contrastive alignment and then performs adaptive, task-aware fusion through a gating mechanism and a Q-bottleneck that selectively compresses information for downstream reasoning; and (3) a lightweight, load-balanced Mixture-of-Experts at prediction time that enables efficient specialization to diverse semantic patterns. Extensive experiments demonstrate that FLUID attains 91% accuracy on the GLAMI-1M benchmark, significantly outperforming prior baselines and exhibiting strong resilience to label noise, long-tail class imbalance, and semantic heterogeneity. Targeted ablation studies corroborate both the individual and synergistic benefits of the proposed components, positioning FLUID as a scalable, noise-resilient solution for multimodal product classification.
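The Q-transform idea, learnable query tokens that distill a variable-length sequence of backbone features into a fixed number of tokens, can be sketched as cross-attention pooling. The code below is a minimal illustration under assumed names and shapes, not the FLUID authors' implementation.

```python
import math

# Illustrative sketch of a "Q-transform"-style distillation step: a small,
# fixed set of (learnable) query vectors attends over all token features
# from a modality backbone via scaled dot-product attention, yielding one
# distilled vector per query. Names and dimensions are assumptions.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def q_transform(queries, tokens):
    """Return len(queries) distilled vectors; each is a convex
    combination of the input token features."""
    dim = len(tokens[0])
    scale = 1.0 / math.sqrt(dim)
    out = []
    for q in queries:
        scores = [scale * sum(qi * ti for qi, ti in zip(q, t)) for t in tokens]
        weights = softmax(scores)
        out.append([sum(w * t[i] for w, t in zip(weights, tokens))
                    for i in range(dim)])
    return out

# 5 backbone tokens of dim 4 distilled into 2 query slots:
tokens = [[float(i == j) for i in range(4)] for j in range(4)] + [[0.5] * 4]
queries = [[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
distilled = q_transform(queries, tokens)
assert len(distilled) == 2 and len(distilled[0]) == 4
```

Because the output size depends only on the number of queries, the downstream fusion and Mixture-of-Experts stages see a fixed-length representation regardless of input sequence length.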